function [Alpha,bias,nsv,exitflag,flps,margin,trn_err]=...
    svmmot(X,I,ker,arg,C)
% SVMMOT learns SVM (L1) classifier using Matlab Optimization Toolbox.
% [Alpha,bias,nsv,eflag,flps,margin,trn_err]=svmmot(X,I,ker,arg,C)
%
% SVMMOT solves the binary Support Vector Machines problem with
% L1 soft margin (classification violations are linearly
% penalized) using the Matlab Optimization Toolbox.
%
% For classification use SVMCLASS.
%
% Mandatory inputs:
%  X       [NxL] L training patterns in N-dimensional space.
%  I       [1xL] labels of training patterns (1 - 1st, 2 - 2nd class).
%  ker     [string] identifier of kernel, see help kernel.
%  arg     [...] arguments of the given kernel, see help kernel.
%  C       [real] trade-off between margin and training error.
%
% Outputs:
%  Alpha   [1xL] Lagrange multipliers for training patterns.
%  bias    [real] bias of the decision function.
%  nsv     [uint] number of Support Vectors, i.e. number of patterns
%          with non-zero (> epsilon) Lagrangians.
%  eflag   [int] exit flag of the quadprog function, see 'help quadprog'.
%  flps    [uint] number of floating point operations used.
%  margin  [real] margin between the classes.
%  trn_err [real] training error (empirical risk).
%
% See also SVMCLASS, SVM.
%
% Statistical Pattern Recognition Toolbox, Vojtech Franc, Vaclav Hlavac
% (c) Czech Technical University Prague, http://cmp.felk.cvut.cz

% Modifications
% 28-Nov-2001, V.F., used quadprog instead of qp
% 23-Oct-2001, V.F.
% 19-September-2001, V. Franc, renamed to svmmot.
% 8-July-2001, V. Franc, comments changed, bias mistake removed.
% 28-April-2001, V. Franc, flps counter added.
% 10-April-2001, V. Franc, created.

flops(0);   % reset counter of floating point operations

% small diagonal term to make the kernel matrix positive definite
ADD_DIAG = 1e-9;

% numerical precision for SVs: if Alpha(i) > epsilon
% then the i-th pattern is a Support Vector. Default is 1e-9.
SV_LIMIT = 1e-9;

if nargin < 5,
   error('Not enough input arguments.');
end

% get dimension and number of points
[N,L] = size(X);

% labels {1,2} -> {1,-1}
Y = itosgn( I );

% compute kernel matrix
K = kernel(X,X,ker,arg);
K = K.*(Y'*Y);

% add small numbers on the diagonal to make K positive definite
K = K + ADD_DIAG*eye(size(K));

% transform the SVM problem to the Optimization Toolbox format:
%
% SVM (Wolfe dual) problem:
%    max   Alpha*ones(L,1) - 0.5*Alpha'*K*Alpha
%   Alpha
%
% subject to:
%    0 <= Alpha <= C
%    Alpha*Y' = 0
%
% Quadratic programming in the Optimization Toolbox:
%    min   0.5*x'*H*x + f'*x
%     x
%
% subject to:
%    Aeq*x = beq,
%    LB <= x <= UB

Aeq = Y;
beq = 0;
f = -ones(L,1);
LB = zeros(L,1);      % 0 <= Alpha
UB = C*ones(L,1);     % Alpha <= C
x0 = zeros(L,1);      % starting point

% call the Optimization Toolbox
%[Alpha,fval,exitflag] = qp(K, f, Aeq, beq, LB, UB, x0, 1);
[Alpha,fval,exitflag] = quadprog(K, f, [], [], Aeq, beq, LB, UB, x0,...
   optimset('Display','off'));

% find the support vectors
sv_inx = find( Alpha > SV_LIMIT );
nsv = length( sv_inx );

% compute the average value of the bias from the points on the margin;
% the patterns on the margin satisfy 0 < Alpha(i) < C
sv_margin = find( Alpha > SV_LIMIT & Alpha < (C - SV_LIMIT) );
nsv_margin = length( sv_margin );
if nsv_margin ~= 0,
   bias = sum( Y(sv_margin)' - K(sv_margin,sv_inx)*Alpha(sv_inx)...
      .*Y(sv_margin)' )/nsv_margin;
else
   bias = 0;   % fall-back value so that bias is always defined
   disp('Bias cannot be determined - no patterns on the margin.');
end

Alpha = Alpha(:)';   % Alpha will be a row vector

% compute margin between classes = 1/norm(w)
if nargout >= 6,
   margin = 1/sqrt(Alpha*K*Alpha');
end

% compute training classification error (empirical risk)
if nargout >= 7,
   K = K.*(Y'*Y);   % undo the label scaling (Y.^2 = 1)
   fpred = K*(Alpha(:).*Y(:)) + ones(length(Y),1)*bias;
   trn_err = length( find( (fpred(:).*Y(:)) < 0 ))/length(Y);
end

flps = flops;   % number of floating point operations used

return;
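% Example usage (a hedged sketch, not part of the original file; it assumes
% the stprtool functions KERNEL and ITOSGN are on the path, that 'rbf' is a
% kernel identifier accepted by KERNEL, and that the Optimization Toolbox
% provides QUADPROG):
%
%   X = [randn(2,25)-1, randn(2,25)+1];   % two Gaussian classes in 2D
%   I = [ones(1,25), 2*ones(1,25)];       % labels {1,2}
%   C = 10;                               % soft-margin trade-off
%   [Alpha,bias,nsv] = svmmot( X, I, 'rbf', 1, C );
%   % the learned classifier can then be evaluated with SVMCLASS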